
Conversation

@souradeep-das (Contributor) commented Aug 6, 2025

What was wrong?

Associated with #1145.

How was it fixed?

  • add a pytest marker angry_mutant for tests that cannot be run in the mutation testing suite (see the sketch after this list)
  • update conftest for the mutmut parallel-run setting
  • add an explicit forced-fail condition, since mutmut fails to attach a programmatic failure when the mutation paths have no possible mutations
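
A minimal sketch of how such a marker is typically wired up (the marker name angry_mutant comes from this PR; the pytest_configure hook is standard pytest, but the test body and file layout here are hypothetical):

# conftest.py
import pytest

def pytest_configure(config):
    # Register the marker so pytest --strict-markers accepts it.
    config.addinivalue_line(
        "markers",
        "angry_mutant: tests that cannot be run in the mutation testing suite",
    )

# in a test module
@pytest.mark.angry_mutant
def test_depends_on_global_state():
    ...

Tests marked this way are then deselected with -m "not angry_mutant", as in the command below.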

Corresponding update on the mutmut side: souradeep-das/mutmut@cc99e50

mutmut has to be run with the --pytest-extra-args option. For instance:

mutmut --paths-to-mutate src/ethereum/frontier/vm/precompiled_contracts/sha256.py \
       --tests-dir tests/json_infra/ \
       --debug \
       --pytest-extra-args '-m "not slow and not angry_mutant" --ignore-glob=tests/json_infra/fixtures/* --fork Frontier' \
       run


@SamWilsn (Contributor) left a comment:

Looks good aside from the locking question. Might want to wait for #1360 to be merged though, since this one looks a bit easier to rebase.

Comment on lines 213 to 214
lock_file = session.stash[fixture_lock]
session.stash[fixture_lock] = None
@SamWilsn (Contributor):

Do we want to avoid locking when running under mutmut?

@souradeep-das (Contributor Author):

Yes, but only when the fixtures have already been downloaded; we could skip locking then.

Since mutmut has multiple phases, the fixtures already exist after the first stats-collection run (that phase happens before the mutation tests start running in parallel).

@SamWilsn (Contributor):

Couldn't another test run delete the fixtures folder while it's running if we don't lock?

@souradeep-das (Contributor Author):

Yeah, that makes sense. Is it a real concern, though, given that the fixtures usually persist?
I was thinking that since mutmut runs in parallel, acquiring the lock in each of these processes might lead to a sequential startup for each mutation run (which should cost some time).
But it's true that if the fixtures were somehow deleted, the workers running at that time would fail.

@SamWilsn (Contributor) commented Aug 25, 2025:

I suppose it depends how mutmut runs in parallel. If it's entirely separate processes, without this lock, each one would blow away the fixtures directory when it starts; but with the lock it'll be serialized.

If mutmut works like xdist, then it'll still run in parallel.

@souradeep-das (Contributor Author):

In mutmut, a separate pytest process is launched for each mutation, so these run independently in parallel (I think unlike xdist?). Also, across all my runs I see the fixtures persist; should we accept the tradeoff, then, given the rare chance of deletion? Here's a CI test run: https://github.com/souradeep-das/execution-specs/actions/runs/17051785519

@SamWilsn (Contributor):

See here. While the tarball is being extracted, the state on disk is uncertain.

I think we have three options: (1) have each process use its own fixtures directory; (2) use an exclusive lock during extraction, a shared lock during test runs, and write the currently extracted tests to disk somewhere; or (3) live with serialized tests.

I'd pretty strongly prefer not ignoring this issue, because (unless I'm misunderstanding) we might end up in the situation where a mutmut process only sees 10% of the tests and then reports a false positive.
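
As a sketch of option (2), assuming portalocker's flock-style lock/unlock API (the paths and helper names here are illustrative, not the PR's actual code):

import os
import portalocker

LOCK_PATH = "fixtures/.lock"  # assumed lock-file location

def fixtures_ready():
    # Hypothetical readiness check; the real one would be more robust.
    return os.path.exists("fixtures/.extracted")

def acquire_fixture_lock(extract_fixtures):
    lock_file = open(LOCK_PATH, "a+")
    # Exclusive lock: only one process extracts the tarball at a time.
    portalocker.lock(lock_file, portalocker.LOCK_EX)
    if not fixtures_ready():
        extract_fixtures()
    # On POSIX flock, re-locking the same handle converts the exclusive
    # lock into a shared one, so parallel test runs can proceed while a
    # new exclusive (re-extract) lock is still blocked.
    portalocker.lock(lock_file, portalocker.LOCK_SH)
    return lock_file  # stays open (and locked) for the whole test run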

@souradeep-das force-pushed the souradeep/mut_test_update branch from 3720b8e to 6a9c5a6 on August 18, 2025 at 19:08
Comment on lines 248 to 249
if all_fixtures_ready:
return
@SamWilsn (Contributor):

This exits the context manager and, as far as I understand portalocker, releases the lock. We need to hold the shared lock for the duration of the test run to prevent another process coming in and taking the exclusive lock.

@souradeep-das (Contributor Author):

Oh yes, updated to hold the shared lock for the entire run now. I used a while-true loop before, which was confusing, so I removed it!
Also tested by deleting the fixtures while mutmut was running.
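
A minimal sketch of what holding the shared lock for the whole session might look like in conftest.py, assuming portalocker and the stash key seen in the diff snippets above (this is an illustration, not the merged code):

import portalocker
import pytest

fixture_lock = pytest.StashKey()

def pytest_sessionstart(session):
    lock_file = open("fixtures/.lock", "a+")  # assumed lock location
    # Shared lock: parallel mutmut workers can all hold it at once, but
    # an exclusive (extraction) lock is blocked until they all finish.
    portalocker.lock(lock_file, portalocker.LOCK_SH)
    session.stash[fixture_lock] = lock_file

def pytest_sessionfinish(session):
    lock_file = session.stash[fixture_lock]
    session.stash[fixture_lock] = None
    if lock_file is not None:
        portalocker.unlock(lock_file)
        lock_file.close()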
